- Search Results
- Search for: All records
- Total resources: 2
- Author / Contributor
  - Chen, Rosy (2)
  - Bajcsy, Andrea (1)
  - Boerkoel Jr., James C (1)
  - Jeong, Hyun Joe (1)
  - Ma, Yiran (1)
  - Wu, Siqi (1)
- Editor
  - Koenig, Sven (1)
  - Stern, Roni (1)
  - Vallati, Mauro (1)
-
Goal-conditioned policies, such as those learned via imitation learning, provide an easy way for humans to influence what tasks robots accomplish. However, these robot policies are not guaranteed to execute safely or to succeed when faced with out-of-distribution goal requests. In this work, we enable robots to know when they can confidently execute a user's desired goal, and automatically suggest safe alternatives when they cannot. Our approach is inspired by control-theoretic safety filtering, wherein a safety filter minimally adjusts a robot's candidate action to be safe. Our key idea is to pose alternative suggestion as a safe control problem in goal space, rather than in action space. Offline, we use reachability analysis to compute a goal-parameterized reach-avoid value network which quantifies the safety and liveness of the robot's pretrained policy. Online, our robot uses the reach-avoid value network as a safety filter, monitoring the human's given goal and actively suggesting alternatives that are similar but meet the safety specification. We demonstrate our Safe ALTernatives (SALT) framework in simulation experiments with indoor navigation and Franka Panda tabletop manipulation, and with both discrete and continuous goal representations. We find that SALT is able to learn to predict successful and failed closed-loop executions, is a less pessimistic monitor than open-loop uncertainty quantification, and proposes alternatives that consistently align with those that people find acceptable.
Free, publicly accessible full text available October 25, 2026.
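The goal-space filtering idea in the abstract can be illustrated with a minimal sketch. All names and the toy "value function" below are assumptions for illustration, not the paper's actual code: a learned goal-parameterized reach-avoid value `V(g) >= 0` marks goal `g` as one the policy can reach safely, and the filter either passes the requested goal through or suggests the nearest safe alternative.

```python
import numpy as np

def reach_avoid_value(goal):
    # Hypothetical stand-in for the learned goal-parameterized reach-avoid
    # value network: here, goals within radius 1.0 of the origin are "safe".
    return 1.0 - np.linalg.norm(goal)

def filter_goal(requested, candidates):
    """Return the requested goal if safe, else the closest safe candidate."""
    if reach_avoid_value(requested) >= 0.0:
        return requested
    safe = [g for g in candidates if reach_avoid_value(g) >= 0.0]
    if not safe:
        return None  # no safe alternative to suggest
    # Suggest the safe goal most similar to what the human asked for.
    return min(safe, key=lambda g: np.linalg.norm(g - requested))

candidates = [np.array([0.9, 0.0]), np.array([0.0, 0.5]), np.array([2.0, 2.0])]
print(filter_goal(np.array([1.5, 0.0]), candidates))  # suggests the nearby safe goal
```

The key design point mirrored here is that filtering happens over goals rather than low-level actions, so the human receives an interpretable alternative request instead of a silently modified motion.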
-
Chen, Rosy; Ma, Yiran; Wu, Siqi; Boerkoel Jr., James C (Proceedings of the International Conference on Automated Planning and Scheduling). Koenig, Sven; Stern, Roni; Vallati, Mauro (Eds.)
Probabilistic Simple Temporal Networks (PSTNs) facilitate solving many interesting scheduling problems by characterizing uncertain task durations with unbounded probability distributions. However, most current approaches assess PSTN performance using normal or uniform distributions of temporal uncertainty. This paper explores how well such approaches extend to families of non-symmetric distributions shown to better represent the temporal uncertainty introduced by, e.g., human teammates, by building new PSTN benchmarks. We also build probability-aware variations of current approaches that are more reactive to the shape of the underlying distributions. We empirically evaluate the original and modified approaches over well-established PSTN datasets. Our results demonstrate that alignment between the planning model and reality significantly impacts performance. While our ideas for augmenting existing algorithms to better account for human-style uncertainty yield only marginal gains, our results surprisingly demonstrate that existing methods handle positively-skewed temporal uncertainty better.
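The effect the abstract studies, how the shape of a duration distribution changes scheduling outcomes, can be sketched with a small Monte-Carlo check. The numbers and distributions below are assumptions for illustration, not the paper's benchmarks: a task's uncertain duration is modeled first as a symmetric normal, then as a positively-skewed log-normal with the same mean (the "human-style" skew the paper considers), and we estimate how often each model meets a fixed deadline.

```python
import math
import random

def success_rate(sampler, deadline, trials=20_000):
    # Estimate P(duration <= deadline) by sampling; seeded for reproducibility.
    random.seed(0)
    return sum(sampler() <= deadline for _ in range(trials)) / trials

mean, sd, deadline = 10.0, 3.0, 14.0  # illustrative values, not from the paper
normal = lambda: random.gauss(mean, sd)

# Log-normal parameterized so its mean matches `mean`, but with a long
# right tail: exp(mu + sigma^2/2) == mean.
sigma = 0.5
mu = math.log(mean) - sigma**2 / 2
lognormal = lambda: random.lognormvariate(mu, sigma)

print(f"normal:     {success_rate(normal, deadline):.3f}")
print(f"log-normal: {success_rate(lognormal, deadline):.3f}")
```

Even with matched means, the skewed model misses the deadline more often because of its heavy right tail, which is why a planner calibrated on symmetric uncertainty can misjudge schedules driven by human task durations.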